# Relative Position Embedding

## Wav2vec2 Conformer Rel Pos Large 100h Ft
- Organization: facebook · License: Apache-2.0
- Tags: Speech Recognition, Transformers, English
- Downloads: 99 · Likes: 0

A large Wav2Vec2-Conformer speech recognition model using relative position embedding, fine-tuned on 100 hours of LibriSpeech data.
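The "relative position embedding" in the model name refers to biasing attention scores by the query–key distance rather than by absolute position. Below is a minimal numpy sketch of one common variant (Shaw-style clipped distance buckets); the function names and dimensions are illustrative, not the model's actual implementation:

```python
import numpy as np

def relative_position_bucket(seq_len, max_distance=4):
    """Map each (query, key) pair to a bucket index in [0, 2*max_distance]."""
    positions = np.arange(seq_len)
    rel = positions[None, :] - positions[:, None]    # signed distance j - i
    rel = np.clip(rel, -max_distance, max_distance)  # clip long distances
    return rel + max_distance                        # shift to non-negative

def attention_with_relative_bias(q, k, bias_table, max_distance=4):
    """Scaled dot-product attention logits plus a learned per-distance bias."""
    seq_len, d = q.shape
    logits = q @ k.T / np.sqrt(d)
    buckets = relative_position_bucket(seq_len, max_distance)
    return logits + bias_table[buckets]              # (seq_len, seq_len)

rng = np.random.default_rng(0)
q = rng.standard_normal((6, 8))
k = rng.standard_normal((6, 8))
bias_table = rng.standard_normal(2 * 4 + 1)          # one bias per distance bucket
scores = attention_with_relative_bias(q, k, bias_table)
print(scores.shape)  # (6, 6)
```

Because the bias depends only on the distance `j - i`, the same table generalizes to sequence lengths unseen in training, which is the main appeal over absolute position embeddings for long audio inputs.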
## Tapas Large Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 124.85k · Likes: 141

TAPAS is a table question answering model based on the BERT architecture, pre-trained in a self-supervised manner on Wikipedia table data; it supports natural language question answering over table content.
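TAPAS handles tables by flattening them into a token sequence while recording, for every token, which row and column it came from; those indices feed extra embedding tables alongside the usual BERT embeddings. A simplified sketch of that flattening step (a hypothetical helper, not the real tokenizer, which adds further id types such as rank and previous-answer ids):

```python
# Convention (as in TAPAS): question tokens get row 0 / column 0;
# header cells get row 0 with a 1-based column id; body cells get
# 1-based row and column ids.

def flatten_table(question_tokens, table):
    """Flatten a question + table into (token, row_id, column_id) triples."""
    triples = [(tok, 0, 0) for tok in question_tokens]
    header, *rows = table
    for col, cell in enumerate(header, start=1):
        triples.append((cell, 0, col))
    for row_id, row in enumerate(rows, start=1):
        for col, cell in enumerate(row, start=1):
            triples.append((cell, row_id, col))
    return triples

table = [["city", "population"],
         ["Paris", "2.1M"],
         ["Berlin", "3.7M"]]
triples = flatten_table(["which", "city", "?"], table)
print(triples[3])   # ('city', 0, 1) -- first header cell
print(triples[-1])  # ('3.7M', 2, 2) -- last body cell
```

The row/column ids are what let a single BERT-style encoder reason about cell alignment without any table-specific architecture.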
## Beit Large Patch16 224 Pt22k Ft22k
- Organization: microsoft · License: Apache-2.0
- Tags: Image Classification
- Downloads: 1,880 · Likes: 5

BEiT is a Vision Transformer (ViT)-based image classification model, pre-trained in a self-supervised manner on ImageNet-22k and fine-tuned on the same dataset.
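The "Patch16 224" in the BEiT model names describes the patch embedding: a 224x224 input image is cut into non-overlapping 16x16 patches, giving 14x14 = 196 tokens for the Transformer. A self-contained sketch of that step (illustrative only; the real models also apply a learned linear projection to each patch):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)     # (nh, nw, patch, patch, c)
    return patches.reshape(-1, patch * patch * c)  # (num_patches, patch*patch*c)

image = np.zeros((224, 224, 3))
tokens = patchify(image)
print(tokens.shape)  # (196, 768) -- 14x14 patches, each 16*16*3 values
```

Each flattened patch then plays the role a word token plays in BERT, which is what lets BEiT reuse masked-prediction pretraining for images.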
## Tapas Tiny
- Organization: google · License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Downloads: 44 · Likes: 0

TAPAS is a Transformer-based table question answering model pre-trained in a self-supervised manner on English Wikipedia table data; it supports table QA and entailment tasks.
## Tapas Base Finetuned Sqa
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 1,867 · Likes: 6

A table question answering model based on the BERT architecture, enhanced with intermediate pretraining for numerical reasoning and fine-tuned on the SQA dataset.
## Beit Base Patch16 224 Pt22k
- Organization: microsoft · License: Apache-2.0
- Tags: Image Classification
- Downloads: 2,647 · Likes: 3

BEiT is a Vision Transformer-based model pre-trained on the ImageNet-21k dataset through self-supervised learning for image classification tasks.
## Tapas Small Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 406 · Likes: 5

A small version of TAPAS, fine-tuned on the WikiTable Questions dataset for table-based question answering tasks.
## Tapas Medium Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 77 · Likes: 2

A medium-sized table question answering model based on the TAPAS architecture, fine-tuned on the WikiTable Questions dataset and suited to QA over tabular data.
## Tapas Tiny Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 1,894 · Likes: 1

A tiny TAPAS model optimized for table question answering, gaining its table comprehension through intermediate pretraining followed by chained fine-tuning across multiple datasets.
## Tapas Mini Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 35 · Likes: 2

A mini version of the TAPAS architecture, fine-tuned on the WikiTable Questions (WTQ) dataset for table question answering tasks.
## Tapas Base Finetuned Wtq
- Organization: google · License: Apache-2.0
- Tags: Question Answering, Transformers, English
- Downloads: 23.03k · Likes: 217

TAPAS is a Transformer-based table question answering model, pre-trained on Wikipedia table data through self-supervised learning and fine-tuned on datasets such as WTQ.